Search Results

Documents authored by Santhanam, Rahul


Document
Computational Complexity of Discrete Problems (Dagstuhl Seminar 23111)

Authors: Anna Gál, Meena Mahajan, Rahul Santhanam, Till Tantau, and Manaswi Paraashar

Published in: Dagstuhl Reports, Volume 13, Issue 3 (2023)


Abstract
This report documents the program and activities of Dagstuhl Seminar 23111 "Computational Complexity of Discrete Problems", which was held in person in March 2023 (the previous instance of the seminar series had been held online in March 2021). Following a description of the seminar’s objectives and its overall organization, this report lists the major talks given during the seminar in alphabetical order of speakers, followed by the abstracts of the talks, including the main references and relevant sources where applicable. The return to an in-person setting allowed an intense atmosphere of active research and interaction throughout the five-day seminar.

Cite as

Anna Gál, Meena Mahajan, Rahul Santhanam, Till Tantau, and Manaswi Paraashar. Computational Complexity of Discrete Problems (Dagstuhl Seminar 23111). In Dagstuhl Reports, Volume 13, Issue 3, pp. 17-31, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@Article{gal_et_al:DagRep.13.3.17,
  author =	{G\'{a}l, Anna and Mahajan, Meena and Santhanam, Rahul and Tantau, Till and Paraashar, Manaswi},
  title =	{{Computational Complexity of Discrete Problems (Dagstuhl Seminar 23111)}},
  pages =	{17--31},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2023},
  volume =	{13},
  number =	{3},
  editor =	{G\'{a}l, Anna and Mahajan, Meena and Santhanam, Rahul and Tantau, Till and Paraashar, Manaswi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagRep.13.3.17},
  URN =		{urn:nbn:de:0030-drops-192261},
  doi =		{10.4230/DagRep.13.3.17},
  annote =	{Keywords: circuit complexity, communication complexity, computational complexity, lower bounds, randomness}
}
Document
An Algorithmic Approach to Uniform Lower Bounds

Authors: Rahul Santhanam

Published in: LIPIcs, Volume 264, 38th Computational Complexity Conference (CCC 2023)


Abstract
We propose a new family of circuit-based sampling tasks, such that non-trivial algorithmic solutions to certain tasks from this family imply frontier uniform lower bounds such as "NP is not in uniform ACC⁰" and "NP does not have uniform polynomial-size depth-two threshold circuits". Indeed, the most general versions of our sampling tasks have implications for central open problems such as NP vs P and PSPACE vs P. We argue the soundness of our approach by showing that the non-trivial algorithmic solutions we require do follow from standard cryptographic assumptions. In addition, we give evidence that a version of our approach for uniform circuits is necessary in order to separate NP from P or PSPACE from P. We give an algorithmic characterization for the PSPACE vs P question: PSPACE ≠ P iff either E has sub-exponential time non-uniform algorithms infinitely often or there are non-trivial space-efficient solutions to our sampling tasks for uniform Boolean circuits. We show how to use our framework to capture uniform versions of known non-uniform lower bounds, as well as classical uniform lower bounds such as the space hierarchy theorem and Allender’s uniform lower bound for the Permanent. We also apply our framework to prove new lower bounds: NP does not have polynomial-size uniform AC⁰ circuits with a bottom layer of MOD 6 gates, nor does it have polynomial-size uniform AC⁰ circuits with a bottom layer of threshold gates. Our proofs exploit recently defined probabilistic time-bounded variants of Kolmogorov complexity [Zhenjian Lu et al., 2022; Halley Goldberg et al., 2022; Halley Goldberg et al., 2022].
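
The algorithmic characterization of the PSPACE vs P question stated above can be typeset as a display (a symbolic restatement of the abstract's claim; "non-trivial" and "infinitely often" are as defined in the paper):

\[
\mathsf{PSPACE} \neq \mathsf{P}
\;\Longleftrightarrow\;
\begin{array}{l}
\text{E has sub-exponential time non-uniform algorithms infinitely often, or}\\
\text{the sampling tasks for uniform Boolean circuits admit non-trivial space-efficient solutions.}
\end{array}
\]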

Cite as

Rahul Santhanam. An Algorithmic Approach to Uniform Lower Bounds. In 38th Computational Complexity Conference (CCC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 264, pp. 35:1-35:26, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{santhanam:LIPIcs.CCC.2023.35,
  author =	{Santhanam, Rahul},
  title =	{{An Algorithmic Approach to Uniform Lower Bounds}},
  booktitle =	{38th Computational Complexity Conference (CCC 2023)},
  pages =	{35:1--35:26},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-282-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{264},
  editor =	{Ta-Shma, Amnon},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2023.35},
  URN =		{urn:nbn:de:0030-drops-183053},
  doi =		{10.4230/LIPIcs.CCC.2023.35},
  annote =	{Keywords: Probabilistic Kolmogorov complexity, sampling algorithms, uniform lower bounds}
}
Document
Invited Talk
Why MCSP Is a More Important Problem Than SAT (Invited Talk)

Authors: Rahul Santhanam

Published in: LIPIcs, Volume 250, 42nd IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2022)


Abstract
CNF Satisfiability (SAT) and its variants are generally considered the central problems in complexity theory, due to their applications in the theory of NP-completeness, logic, verification, probabilistically checkable proofs and parameterized complexity, among other areas. We challenge this conventional wisdom and argue that analysing the Minimum Circuit Size Problem (MCSP) and its relatives is more important from the perspective of fundamental problems in complexity theory, such as complexity lower bounds, minimal assumptions for cryptography, a robust theory of average-case complexity, and optimal results in hardness of approximation.

Cite as

Rahul Santhanam. Why MCSP Is a More Important Problem Than SAT (Invited Talk). In 42nd IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 250, p. 2:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{santhanam:LIPIcs.FSTTCS.2022.2,
  author =	{Santhanam, Rahul},
  title =	{{Why MCSP Is a More Important Problem Than SAT}},
  booktitle =	{42nd IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2022)},
  pages =	{2:1--2:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-261-7},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{250},
  editor =	{Dawar, Anuj and Guruswami, Venkatesan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2022.2},
  URN =		{urn:nbn:de:0030-drops-173943},
  doi =		{10.4230/LIPIcs.FSTTCS.2022.2},
  annote =	{Keywords: Minimum Circuit Size Problem, Satisfiability, Cryptography, Learning, Approximation}
}
Document
On Randomized Reductions to the Random Strings

Authors: Michael Saks and Rahul Santhanam

Published in: LIPIcs, Volume 234, 37th Computational Complexity Conference (CCC 2022)


Abstract
We study the power of randomized polynomial-time non-adaptive reductions to the problem of approximating Kolmogorov complexity and its polynomial-time bounded variants. As our first main result, we give a sharp dichotomy for randomized non-adaptive reducibility to approximating Kolmogorov complexity. We show that any computable language L that has a randomized polynomial-time non-adaptive reduction (satisfying a natural honesty condition) to ω(log(n))-approximating the Kolmogorov complexity is in AM ∩ coAM. On the other hand, using results of Hirahara [Shuichi Hirahara, 2020], it follows that every language in NEXP has a randomized polynomial-time non-adaptive reduction (satisfying the same honesty condition as before) to O(log(n))-approximating the Kolmogorov complexity. As our second main result, we give the first negative evidence against the NP-hardness of polynomial-time bounded Kolmogorov complexity with respect to randomized reductions. We show that for every polynomial t', there is a polynomial t such that if there is a randomized time t' non-adaptive reduction (satisfying a natural honesty condition) from SAT to ω(log(n))-approximating the K^t complexity, then either NE = coNE or E has sub-exponential size non-deterministic circuits infinitely often.
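
To make the dichotomy easier to scan, it can be written in display form (notation ours, not the paper's: $\le_{\mathrm{tt}}^{\mathrm{BPP}}$ denotes a randomized polynomial-time non-adaptive reduction satisfying the honesty condition, and $\mathrm{approx}_g K$ denotes the problem of $g$-approximating Kolmogorov complexity):

\[
\begin{aligned}
L \text{ computable},\; L \le_{\mathrm{tt}}^{\mathrm{BPP}} \mathrm{approx}_{\omega(\log n)} K
&\;\Longrightarrow\; L \in \mathsf{AM} \cap \mathsf{coAM},\\
L \in \mathsf{NEXP}
&\;\Longrightarrow\; L \le_{\mathrm{tt}}^{\mathrm{BPP}} \mathrm{approx}_{O(\log n)} K.
\end{aligned}
\]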

Cite as

Michael Saks and Rahul Santhanam. On Randomized Reductions to the Random Strings. In 37th Computational Complexity Conference (CCC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 234, pp. 29:1-29:30, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{saks_et_al:LIPIcs.CCC.2022.29,
  author =	{Saks, Michael and Santhanam, Rahul},
  title =	{{On Randomized Reductions to the Random Strings}},
  booktitle =	{37th Computational Complexity Conference (CCC 2022)},
  pages =	{29:1--29:30},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-241-9},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{234},
  editor =	{Lovett, Shachar},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2022.29},
  URN =		{urn:nbn:de:0030-drops-165912},
  doi =		{10.4230/LIPIcs.CCC.2022.29},
  annote =	{Keywords: Kolmogorov complexity, randomized reductions}
}
Document
Track A: Algorithms, Complexity and Games
Learning Algorithms Versus Automatability of Frege Systems

Authors: Ján Pich and Rahul Santhanam

Published in: LIPIcs, Volume 229, 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)


Abstract
We connect learning algorithms and algorithms automating proof search in propositional proof systems: for every sufficiently strong, well-behaved propositional proof system P, we prove that the following statements are equivalent.
1. (Provable learning) P proves efficiently that p-size circuits are learnable by subexponential-size circuits over the uniform distribution with membership queries.
2. (Provable automatability) P proves efficiently that P is automatable by non-uniform circuits on propositional formulas expressing p-size circuit lower bounds.
Here, P is sufficiently strong and well-behaved if I.-III. hold:
I. P p-simulates Jeřábek’s system WF (which strengthens the Extended Frege system EF by a surjective weak pigeonhole principle);
II. P satisfies some basic properties of standard proof systems which p-simulate WF;
III. P proves efficiently for some Boolean function h that h is hard on average for circuits of subexponential size.
For example, if III. holds for P = WF, then Items 1 and 2 are equivalent for P = WF. The notion of automatability in Item 2 is slightly modified so that the automating algorithm outputs a proof of a given formula (expressing a p-size circuit lower bound) in p-time in the length of the shortest proof of a closely related but different formula (expressing an average-case subexponential-size circuit lower bound). If there is a function h ∈ NE ∩ coNE which is hard on average for circuits of size 2^{n/4} for all sufficiently large n, then there is an explicit propositional proof system P satisfying properties I.-III., i.e., the equivalence of Items 1 and 2 holds for P.

Cite as

Ján Pich and Rahul Santhanam. Learning Algorithms Versus Automatability of Frege Systems. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 101:1-101:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{pich_et_al:LIPIcs.ICALP.2022.101,
  author =	{Pich, J\'{a}n and Santhanam, Rahul},
  title =	{{Learning Algorithms Versus Automatability of Frege Systems}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{101:1--101:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.101},
  URN =		{urn:nbn:de:0030-drops-164427},
  doi =		{10.4230/LIPIcs.ICALP.2022.101},
  annote =	{Keywords: learning algorithms, automatability, proof complexity}
}
Document
A Relativization Perspective on Meta-Complexity

Authors: Hanlin Ren and Rahul Santhanam

Published in: LIPIcs, Volume 219, 39th International Symposium on Theoretical Aspects of Computer Science (STACS 2022)


Abstract
Meta-complexity studies the complexity of computational problems about complexity theory, such as the Minimum Circuit Size Problem (MCSP) and its variants. We show that a relativization barrier applies to many important open questions in meta-complexity. We give relativized worlds where:
1) MCSP can be solved in deterministic polynomial time, but the search version of MCSP cannot be solved in deterministic polynomial time, even approximately. In contrast, Carmosino, Impagliazzo, Kabanets, and Kolokolova [CCC'16] gave a randomized approximate search-to-decision reduction for MCSP with a relativizing proof.
2) The complexities of MCSP[2^{n/2}] and MCSP[2^{n/4}] are different, in both worst-case and average-case settings. Thus the complexity of MCSP is not "robust" to the choice of the size function.
3) Levin’s time-bounded Kolmogorov complexity Kt(x) can be approximated to a factor (2+ε) in polynomial time, for any ε > 0.
4) Natural proofs do not exist, and neither do auxiliary-input one-way functions. In contrast, Santhanam [ITCS'20] gave a relativizing proof that the non-existence of natural proofs implies the existence of one-way functions under a conjecture about optimal hitting sets.
5) DistNP does not reduce to GapMINKT by a family of "robust" reductions. This presents a technical barrier for solving a question of Hirahara [FOCS'20].

Cite as

Hanlin Ren and Rahul Santhanam. A Relativization Perspective on Meta-Complexity. In 39th International Symposium on Theoretical Aspects of Computer Science (STACS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 219, pp. 54:1-54:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{ren_et_al:LIPIcs.STACS.2022.54,
  author =	{Ren, Hanlin and Santhanam, Rahul},
  title =	{{A Relativization Perspective on Meta-Complexity}},
  booktitle =	{39th International Symposium on Theoretical Aspects of Computer Science (STACS 2022)},
  pages =	{54:1--54:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-222-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{219},
  editor =	{Berenbrink, Petra and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2022.54},
  URN =		{urn:nbn:de:0030-drops-158646},
  doi =		{10.4230/LIPIcs.STACS.2022.54},
  annote =	{Keywords: meta-complexity, relativization, minimum circuit size problem}
}
Document
Errorless Versus Error-Prone Average-Case Complexity

Authors: Shuichi Hirahara and Rahul Santhanam

Published in: LIPIcs, Volume 215, 13th Innovations in Theoretical Computer Science Conference (ITCS 2022)


Abstract
We consider the question of whether errorless and error-prone notions of average-case hardness are equivalent, and make several contributions. First, we study this question in the context of hardness for NP, and connect it to the long-standing open question of whether there are instance checkers for NP. We show that there is an efficient non-uniform non-adaptive reduction from errorless to error-prone heuristics for NP if and only if there is an efficient non-uniform average-case non-adaptive instance-checker for NP. We also suggest an approach to proving equivalence of the two notions of average-case hardness for PH. Second, we show unconditionally that error-prone average-case hardness is equivalent to errorless average-case hardness for P against NC¹ and for UP ∩ coUP against P. Third, we apply our results about errorless and error-prone average-case hardness to get new equivalences between hitting set generators and pseudo-random generators.

Cite as

Shuichi Hirahara and Rahul Santhanam. Errorless Versus Error-Prone Average-Case Complexity. In 13th Innovations in Theoretical Computer Science Conference (ITCS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 215, pp. 84:1-84:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{hirahara_et_al:LIPIcs.ITCS.2022.84,
  author =	{Hirahara, Shuichi and Santhanam, Rahul},
  title =	{{Errorless Versus Error-Prone Average-Case Complexity}},
  booktitle =	{13th Innovations in Theoretical Computer Science Conference (ITCS 2022)},
  pages =	{84:1--84:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-217-4},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{215},
  editor =	{Braverman, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2022.84},
  URN =		{urn:nbn:de:0030-drops-156803},
  doi =		{10.4230/LIPIcs.ITCS.2022.84},
  annote =	{Keywords: average-case complexity, instance checker, pseudorandomness}
}
Document
Excluding PH Pessiland

Authors: Shuichi Hirahara and Rahul Santhanam

Published in: LIPIcs, Volume 215, 13th Innovations in Theoretical Computer Science Conference (ITCS 2022)


Abstract
Heuristica and Pessiland are "worlds" of average-case complexity [Impagliazzo95] that are considered unlikely but that current techniques are unable to rule out. Recently, [Hirahara20] considered a PH (Polynomial Hierarchy) analogue of Heuristica, and showed that to rule it out, it would be sufficient to prove the NP-completeness of the problem GapMINKT^PH of estimating the PH-oracle time-bounded Kolmogorov complexity of a string. In this work, we analogously define "PH Pessiland" to be a world where PH is hard on average but PH-computable pseudorandom generators do not exist. We unconditionally rule out PH Pessiland in both non-uniform and uniform settings, by showing that the distributional problem of computing the PH-oracle time-bounded Kolmogorov complexity of a string over the uniform distribution is complete for an (error-prone) average-case analogue of PH. Moreover, we show the equivalence between error-prone average-case hardness of PH and the existence of PH-computable pseudorandom generators.

Cite as

Shuichi Hirahara and Rahul Santhanam. Excluding PH Pessiland. In 13th Innovations in Theoretical Computer Science Conference (ITCS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 215, pp. 85:1-85:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{hirahara_et_al:LIPIcs.ITCS.2022.85,
  author =	{Hirahara, Shuichi and Santhanam, Rahul},
  title =	{{Excluding PH Pessiland}},
  booktitle =	{13th Innovations in Theoretical Computer Science Conference (ITCS 2022)},
  pages =	{85:1--85:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-217-4},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{215},
  editor =	{Braverman, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2022.85},
  URN =		{urn:nbn:de:0030-drops-156819},
  doi =		{10.4230/LIPIcs.ITCS.2022.85},
  annote =	{Keywords: average-case complexity, pseudorandomness, meta-complexity}
}
Document
RANDOM
On the Structure of Learnability Beyond P/Poly

Authors: Ninad Rajgopal and Rahul Santhanam

Published in: LIPIcs, Volume 207, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)


Abstract
Motivated by the goal of showing stronger structural results about the complexity of learning, we study the learnability of strong concept classes beyond P/poly, such as PSPACE/poly and EXP/poly. We show the following:
1) (Unconditional Lower Bounds for Learning) Building on [Adam R. Klivans et al., 2013], we prove unconditionally that BPE/poly cannot be weakly learned in polynomial time over the uniform distribution, even with membership and equivalence queries.
2) (Robustness of Learning) For the concept classes EXP/poly and PSPACE/poly, we show unconditionally that worst-case and average-case learning are equivalent, that PAC-learnability and learnability over the uniform distribution are equivalent, and that membership queries do not help in either case.
3) (Reducing Succinct Search to Decision for Learning) For the decision problems R_{Kt} and R_{KS}, which capture the complexity of learning EXP/poly and PSPACE/poly respectively, we show a succinct search-to-decision reduction: each of these problems is in BPP iff there is a probabilistic polynomial-time algorithm computing circuits that encode proofs for positive instances of the problem. This is shown via a more general result giving succinct search-to-decision reductions for PSPACE, EXP and NEXP, which might be of independent interest.
4) (Implausibility of Oblivious Strongly Black-Box Reductions Showing NP-Hardness of Learning NP/poly) We define a natural notion of hardness of learning with respect to oblivious strongly black-box reductions. We show that learning PSPACE/poly is PSPACE-hard with respect to oblivious strongly black-box reductions. On the other hand, if learning NP/poly is NP-hard with respect to oblivious strongly black-box reductions, then the Polynomial Hierarchy collapses.

Cite as

Ninad Rajgopal and Rahul Santhanam. On the Structure of Learnability Beyond P/Poly. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 207, pp. 46:1-46:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{rajgopal_et_al:LIPIcs.APPROX/RANDOM.2021.46,
  author =	{Rajgopal, Ninad and Santhanam, Rahul},
  title =	{{On the Structure of Learnability Beyond P/Poly}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)},
  pages =	{46:1--46:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-207-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{207},
  editor =	{Wootters, Mary and Sanit\`{a}, Laura},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2021.46},
  URN =		{urn:nbn:de:0030-drops-147395},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2021.46},
  annote =	{Keywords: Hardness of Learning, Oracle Circuit Classes, Succinct Search, Black-Box Reductions}
}
Document
Computational Complexity of Discrete Problems (Dagstuhl Seminar 21121)

Authors: Anna Gál, Meena Mahajan, Rahul Santhanam, and Till Tantau

Published in: Dagstuhl Reports, Volume 11, Issue 2 (2021)


Abstract
This report documents the program and activities of Dagstuhl Seminar 21121 "Computational Complexity of Discrete Problems," which was held online in March 2021. Starting with a description of the organization of the online meeting and the topics covered, we then list the different talks given during the seminar in alphabetical order of speakers, followed by the abstracts of the talks, including the main references and relevant sources where applicable. Although only a compressed daily time slot was available for the seminar, whose participants spanned time zones across the whole globe, and although informal discussions were harder to hold than in a typical on-site seminar, the rate of participation remained very high throughout and many lively scientific debates were held.

Cite as

Anna Gál, Meena Mahajan, Rahul Santhanam, and Till Tantau. Computational Complexity of Discrete Problems (Dagstuhl Seminar 21121). In Dagstuhl Reports, Volume 11, Issue 2, pp. 1-16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@Article{gal_et_al:DagRep.11.2.1,
  author =	{G\'{a}l, Anna and Mahajan, Meena and Santhanam, Rahul and Tantau, Till},
  title =	{{Computational Complexity of Discrete Problems (Dagstuhl Seminar 21121)}},
  pages =	{1--16},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2021},
  volume =	{11},
  number =	{2},
  editor =	{G\'{a}l, Anna and Mahajan, Meena and Santhanam, Rahul and Tantau, Till},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagRep.11.2.1},
  URN =		{urn:nbn:de:0030-drops-146836},
  doi =		{10.4230/DagRep.11.2.1},
  annote =	{Keywords: circuit complexity, communication complexity, computational complexity, lower bounds, randomness}
}
Document
Hardness of KT Characterizes Parallel Cryptography

Authors: Hanlin Ren and Rahul Santhanam

Published in: LIPIcs, Volume 200, 36th Computational Complexity Conference (CCC 2021)


Abstract
A recent breakthrough of Liu and Pass (FOCS'20) shows that one-way functions exist if and only if the (polynomial-)time-bounded Kolmogorov complexity, K^t, is bounded-error hard on average to compute. In this paper, we strengthen this result and extend it to other complexity measures:
- We show, perhaps surprisingly, that the KT complexity is bounded-error average-case hard if and only if there exist one-way functions in constant parallel time (i.e., NC⁰). This result crucially relies on the idea of randomized encodings. Previously, a seminal work of Applebaum, Ishai, and Kushilevitz (FOCS'04; SICOMP'06) used the same idea to show that NC⁰-computable one-way functions exist if and only if logspace-computable one-way functions exist.
- Inspired by the above result, we present randomized average-case reductions among the NC¹-versions and logspace-versions of K^t complexity, and the KT complexity. Our reductions preserve both bounded-error average-case hardness and zero-error average-case hardness. To the best of our knowledge, this is the first reduction between the KT complexity and a variant of K^t complexity.
- We prove tight connections between the hardness of K^t complexity and the hardness of (the hardest) one-way functions. In analogy with the Exponential-Time Hypothesis and its variants, we define and motivate the Perebor Hypotheses for complexity measures such as K^t and KT. We show that a Strong Perebor Hypothesis for K^t implies the existence of (weak) one-way functions of near-optimal hardness 2^{n-o(n)}. To the best of our knowledge, this is the first construction of one-way functions of near-optimal hardness based on a natural complexity assumption about a search problem.
- We show that a Weak Perebor Hypothesis for MCSP implies the existence of one-way functions, and establish a partial converse. This is the first unconditional construction of one-way functions from the hardness of MCSP over a natural distribution.
- Finally, we study the average-case hardness of MKtP. We show that it characterizes cryptographic pseudorandomness in one natural regime of parameters, and complexity-theoretic pseudorandomness in another natural regime.
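
The first bullet's characterization can be restated in display form (a restatement of the abstract's claim; the precise average-case notion is as defined in the paper):

\[
\mathrm{KT} \text{ is bounded-error hard on average}
\;\Longleftrightarrow\;
\text{one-way functions computable in } \mathsf{NC}^0 \text{ exist.}
\]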

Cite as

Hanlin Ren and Rahul Santhanam. Hardness of KT Characterizes Parallel Cryptography. In 36th Computational Complexity Conference (CCC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 200, pp. 35:1-35:58, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{ren_et_al:LIPIcs.CCC.2021.35,
  author =	{Ren, Hanlin and Santhanam, Rahul},
  title =	{{Hardness of KT Characterizes Parallel Cryptography}},
  booktitle =	{36th Computational Complexity Conference (CCC 2021)},
  pages =	{35:1--35:58},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-193-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{200},
  editor =	{Kabanets, Valentine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2021.35},
  URN =		{urn:nbn:de:0030-drops-143091},
  doi =		{10.4230/LIPIcs.CCC.2021.35},
  annote =	{Keywords: one-way function, meta-complexity, KT complexity, parallel cryptography, randomized encodings}
}
Document
On the Pseudo-Deterministic Query Complexity of NP Search Problems

Authors: Shafi Goldwasser, Russell Impagliazzo, Toniann Pitassi, and Rahul Santhanam

Published in: LIPIcs, Volume 200, 36th Computational Complexity Conference (CCC 2021)


Abstract
We study pseudo-deterministic query complexity: randomized query algorithms that are required to output the same answer with high probability on all inputs. We prove Ω(√n) lower bounds on the pseudo-deterministic complexity of a large family of search problems based on unsatisfiable random CNF instances, and also for the promise problem (FIND1) of finding a 1 in a vector populated with at least half ones. This gives an exponential separation between randomized query complexity and pseudo-deterministic complexity, which is tight in the quantum setting. As applications, we partially solve a related combinatorial coloring problem, and we separate random tree-like Resolution from its pseudo-deterministic version. In contrast to our lower bound, we show, surprisingly, that in the zero-error, average-case setting, the three notions (deterministic, randomized, pseudo-deterministic) collapse.

Cite as

Shafi Goldwasser, Russell Impagliazzo, Toniann Pitassi, and Rahul Santhanam. On the Pseudo-Deterministic Query Complexity of NP Search Problems. In 36th Computational Complexity Conference (CCC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 200, pp. 36:1-36:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{goldwasser_et_al:LIPIcs.CCC.2021.36,
  author =	{Goldwasser, Shafi and Impagliazzo, Russell and Pitassi, Toniann and Santhanam, Rahul},
  title =	{{On the Pseudo-Deterministic Query Complexity of NP Search Problems}},
  booktitle =	{36th Computational Complexity Conference (CCC 2021)},
  pages =	{36:1--36:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-193-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{200},
  editor =	{Kabanets, Valentine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2021.36},
  URN =		{urn:nbn:de:0030-drops-143104},
  doi =		{10.4230/LIPIcs.CCC.2021.36},
  annote =	{Keywords: Pseudo-determinism, Query complexity, Proof complexity}
}
Document
Circuit Lower Bounds from NP-Hardness of MCSP Under Turing Reductions

Authors: Michael Saks and Rahul Santhanam

Published in: LIPIcs, Volume 169, 35th Computational Complexity Conference (CCC 2020)


Abstract
The fundamental Minimum Circuit Size Problem (MCSP) is a well-known example of a problem that is neither known to be in P nor known to be NP-hard. Kabanets and Cai [Kabanets and Cai, 2000] showed that if MCSP were NP-hard under "natural" m-reductions, superpolynomial circuit lower bounds for exponential time would follow. This has triggered a long line of work on understanding the power of reductions to MCSP. So far, nothing was known about the consequences of NP-hardness of MCSP under general Turing reductions. In this work, we consider two structured kinds of Turing reductions: parametric honest reductions and natural reductions. The latter generalize the natural reductions of Kabanets and Cai to the case of Turing reductions. We show that NP-hardness of MCSP under either of these kinds of Turing reductions implies superpolynomial circuit lower bounds for exponential time.

Cite as

Michael Saks and Rahul Santhanam. Circuit Lower Bounds from NP-Hardness of MCSP Under Turing Reductions. In 35th Computational Complexity Conference (CCC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 169, pp. 26:1-26:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{saks_et_al:LIPIcs.CCC.2020.26,
  author =	{Saks, Michael and Santhanam, Rahul},
  title =	{{Circuit Lower Bounds from NP-Hardness of MCSP Under Turing Reductions}},
  booktitle =	{35th Computational Complexity Conference (CCC 2020)},
  pages =	{26:1--26:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-156-6},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{169},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2020.26},
  URN =		{urn:nbn:de:0030-drops-125786},
  doi =		{10.4230/LIPIcs.CCC.2020.26},
  annote =	{Keywords: Minimum Circuit Size Problem, Turing reductions, circuit lower bounds}
}
Document
Pseudorandomness and the Minimum Circuit Size Problem

Authors: Rahul Santhanam

Published in: LIPIcs, Volume 151, 11th Innovations in Theoretical Computer Science Conference (ITCS 2020)


Abstract
We explore the possibility of basing one-way functions on the average-case hardness of the fundamental Minimum Circuit Size Problem (MCSP[s]), which asks whether a Boolean function on n bits specified by its truth table has circuits of size s(n).
1) (Pseudorandomness from Zero-Error Average-Case Hardness) We show that for a given size function s, the following are equivalent: pseudorandom distributions supported on strings describable by s(O(n))-size circuits exist; hitting sets supported on strings describable by s(O(n))-size circuits exist; MCSP[s(O(n))] is zero-error average-case hard. Using similar techniques, we show that Feige’s hypothesis for random k-CNFs implies that there is a pseudorandom distribution (with constant error) supported entirely on satisfiable formulas. Underlying our results is a general notion of semantic sampling, which might be of independent interest.
2) (A New Conjecture) In analogy to a known universal construction of succinct hitting sets against arbitrary polynomial-size adversaries, we propose the Universality Conjecture: there is a universal construction of succinct pseudorandom distributions against arbitrary polynomial-size adversaries. We show that under the Universality Conjecture, the following are equivalent: one-way functions exist; natural proofs useful against sub-exponential size circuits do not exist; learning polynomial-size circuits with membership queries over the uniform distribution is hard; MCSP[2^{εn}] is zero-error hard on average for some ε > 0; cryptographic succinct hitting set generators exist.
3) (Non-Black-Box Results) We show that for weak circuit classes ℭ against which there are natural proofs [Alexander A. Razborov and Steven Rudich, 1997], pseudorandom functions secure against poly-size circuits in ℭ imply superpolynomial lower bounds in P against poly-size circuits in ℭ. We also show that for a certain natural variant of MCSP, there is a polynomial-time reduction from approximating the problem well in the worst case to solving it on average. These results are shown using non-black-box techniques, and in the first case we show that there is no black-box proof of the result under standard crypto assumptions.
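
The three-way equivalence in item 1 can be displayed as follows (a restatement of the abstract, for a given size function s):

\[
\begin{array}{c}
\text{pseudorandom distributions supported on strings describable by } s(O(n))\text{-size circuits exist}\\
\Updownarrow\\
\text{hitting sets supported on strings describable by } s(O(n))\text{-size circuits exist}\\
\Updownarrow\\
\mathrm{MCSP}[s(O(n))] \text{ is zero-error average-case hard.}
\end{array}
\]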

Cite as

Rahul Santhanam. Pseudorandomness and the Minimum Circuit Size Problem. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 68:1-68:26, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{santhanam:LIPIcs.ITCS.2020.68,
  author =	{Santhanam, Rahul},
  title =	{{Pseudorandomness and the Minimum Circuit Size Problem}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{68:1--68:26},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.68},
  URN =		{urn:nbn:de:0030-drops-117532},
  doi =		{10.4230/LIPIcs.ITCS.2020.68},
  annote =	{Keywords: Minimum Circuit Size Problem, Pseudorandomness, Average-case Complexity, Natural Proofs, Universality Conjecture}
}
Document
Beyond Natural Proofs: Hardness Magnification and Locality

Authors: Lijie Chen, Shuichi Hirahara, Igor C. Oliveira, Ján Pich, Ninad Rajgopal, and Rahul Santhanam

Published in: LIPIcs, Volume 151, 11th Innovations in Theoretical Computer Science Conference (ITCS 2020)


Abstract
Hardness magnification reduces major complexity separations (such as EXP ⊈ NC¹) to proving lower bounds for some natural problem Q against weak circuit models. Several recent works [Igor Carboni Oliveira and Rahul Santhanam, 2018; Dylan M. McKay et al., 2019; Lijie Chen and Roei Tell, 2019; Igor Carboni Oliveira et al., 2019; Lijie Chen et al., 2019; Igor Carboni Oliveira, 2019; Lijie Chen et al., 2019] have established results of this form. In the most intriguing cases, the required lower bound is known for problems that appear to be significantly easier than Q, while Q itself is susceptible to lower bounds, but these are not yet sufficient for magnification. In this work, we provide more examples of this phenomenon and investigate the prospects of proving new lower bounds using this approach. In particular, we consider the following essential questions associated with the hardness magnification program:
- Does hardness magnification avoid the natural proofs barrier of Razborov and Rudich [Alexander A. Razborov and Steven Rudich, 1997]?
- Can we adapt known lower bound techniques to establish the desired lower bound for Q?
We establish that some instantiations of hardness magnification overcome the natural proofs barrier in the following sense: slightly superlinear-size circuit lower bounds for certain versions of the minimum circuit size problem MCSP imply the non-existence of natural proofs. As a corollary of our result, we show that certain magnification theorems not only imply strong worst-case circuit lower bounds but also rule out the existence of efficient learning algorithms. Hardness magnification might sidestep natural proofs, but we identify a source of difficulty when trying to adapt existing lower bound techniques to prove strong lower bounds via magnification. This difficulty is captured by a locality barrier: existing magnification theorems unconditionally show that the problems Q considered above admit highly efficient circuits extended with small fan-in oracle gates, while lower bound techniques against weak circuit models quite often extend easily to circuits containing such oracles. This explains why direct adaptations of certain lower bounds are unlikely to yield strong complexity separations via hardness magnification.

Cite as

Lijie Chen, Shuichi Hirahara, Igor C. Oliveira, Ján Pich, Ninad Rajgopal, and Rahul Santhanam. Beyond Natural Proofs: Hardness Magnification and Locality. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 70:1-70:48, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{chen_et_al:LIPIcs.ITCS.2020.70,
  author =	{Chen, Lijie and Hirahara, Shuichi and Oliveira, Igor C. and Pich, J\'{a}n and Rajgopal, Ninad and Santhanam, Rahul},
  title =	{{Beyond Natural Proofs: Hardness Magnification and Locality}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{70:1--70:48},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.70},
  URN =		{urn:nbn:de:0030-drops-117550},
  doi =		{10.4230/LIPIcs.ITCS.2020.70},
  annote =	{Keywords: Hardness Magnification, Natural Proofs, Minimum Circuit Size Problem, Circuit Lower Bounds}
}
Document
Computational Complexity of Discrete Problems (Dagstuhl Seminar 19121)

Authors: Anna Gál, Rahul Santhanam, and Till Tantau

Published in: Dagstuhl Reports, Volume 9, Issue 3 (2019)


Abstract
The following report archives the presentations and activities of the March 2019 Dagstuhl Seminar 19121 "Computational Complexity of Discrete Problems". Section 1 summarizes the topics and some specific results offered in selected talks during the course of the week. Section 2 provides a table of contents, listing each of the talks given in alphabetical order. Section 3 contains the abstracts, indicating both the main reference and other relevant sources (where applicable) to allow the reader to investigate the topics further.

Cite as

Anna Gál, Rahul Santhanam, and Till Tantau. Computational Complexity of Discrete Problems (Dagstuhl Seminar 19121). In Dagstuhl Reports, Volume 9, Issue 3, pp. 64-82, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@Article{gal_et_al:DagRep.9.3.64,
  author =	{G\'{a}l, Anna and Santhanam, Rahul and Tantau, Till},
  title =	{{Computational Complexity of Discrete Problems (Dagstuhl Seminar 19121)}},
  pages =	{64--82},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2019},
  volume =	{9},
  number =	{3},
  editor =	{G\'{a}l, Anna and Santhanam, Rahul and Tantau, Till},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagRep.9.3.64},
  URN =		{urn:nbn:de:0030-drops-112920},
  doi =		{10.4230/DagRep.9.3.64},
  annote =	{Keywords: circuit complexity, communication complexity, computational complexity, parametrisation, randomness}
}
Document
Parity Helps to Compute Majority

Authors: Igor Carboni Oliveira, Rahul Santhanam, and Srikanth Srinivasan

Published in: LIPIcs, Volume 137, 34th Computational Complexity Conference (CCC 2019)


Abstract
We study the complexity of computing symmetric and threshold functions by constant-depth circuits with Parity gates, also known as AC⁰[⊕] circuits. Razborov [Alexander A. Razborov, 1987] and Smolensky [Roman Smolensky, 1987; Roman Smolensky, 1993] showed that Majority requires depth-d AC⁰[⊕] circuits of size 2^{Ω(n^{1/2(d-1)})}. By using a divide-and-conquer approach, it is easy to show that Majority can be computed with depth-d AC⁰[⊕] circuits of size 2^{Õ(n^{1/(d-1)})}. This gap between upper and lower bounds has stood for nearly three decades. Somewhat surprisingly, we show that neither the upper bound nor the lower bound above is tight for large d. We show for d ≥ 5 that any symmetric function can be computed with depth-d AC⁰[⊕] circuits of size exp(Õ(n^{2/3 · 1/(d-4)})). Our upper bound extends to threshold functions (with a constant additive loss in the denominator of the double exponent). We improve the Razborov-Smolensky lower bound to show that for d ≥ 3 Majority requires depth-d AC⁰[⊕] circuits of size 2^{Ω(n^{1/(2d-4)})}. For depths d ≤ 4, we are able to refine our techniques to get almost-optimal bounds: the depth-3 AC⁰[⊕] circuit size of Majority is 2^{Θ̃(n^{1/2})}, while its depth-4 AC⁰[⊕] circuit size is 2^{Θ̃(n^{1/4})}.
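
Since the exponents are easy to misread in running text, the bounds above can be collected in one display (a restatement of the abstract; $\widetilde{O}$ and $\widetilde{\Theta}$ hide polylogarithmic factors, and all sizes are for depth-$d$ $\mathrm{AC}^0[\oplus]$ circuits computing Majority unless noted):

\[
\begin{array}{ll}
\text{Razborov-Smolensky lower bound:} & 2^{\Omega(n^{1/2(d-1)})}\\
\text{divide-and-conquer upper bound:} & 2^{\widetilde{O}(n^{1/(d-1)})}\\
\text{new upper bound (any symmetric function, } d \ge 5\text{):} & \exp\bigl(\widetilde{O}\bigl(n^{\frac{2}{3}\cdot\frac{1}{d-4}}\bigr)\bigr)\\
\text{new lower bound (} d \ge 3\text{):} & 2^{\Omega(n^{1/(2d-4)})}\\
\text{depth 3 (tight):} & 2^{\widetilde{\Theta}(n^{1/2})}\\
\text{depth 4 (tight):} & 2^{\widetilde{\Theta}(n^{1/4})}
\end{array}
\]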

Cite as

Igor Carboni Oliveira, Rahul Santhanam, and Srikanth Srinivasan. Parity Helps to Compute Majority. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 23:1-23:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{oliveira_et_al:LIPIcs.CCC.2019.23,
  author =	{Oliveira, Igor Carboni and Santhanam, Rahul and Srinivasan, Srikanth},
  title =	{{Parity Helps to Compute Majority}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{23:1--23:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.23},
  URN =		{urn:nbn:de:0030-drops-108453},
  doi =		{10.4230/LIPIcs.CCC.2019.23},
  annote =	{Keywords: Computational Complexity, Boolean Circuits, Lower Bounds, Parity, Majority}
}
Document
Hardness Magnification near State-Of-The-Art Lower Bounds

Authors: Igor Carboni Oliveira, Ján Pich, and Rahul Santhanam

Published in: LIPIcs, Volume 137, 34th Computational Complexity Conference (CCC 2019)


Abstract
This work continues the development of hardness magnification. The latter proposes a new strategy for showing strong complexity lower bounds by reducing them to a refined analysis of weaker models, where combinatorial techniques might be successful. We consider gap versions of the meta-computational problems MKtP and MCSP, where one needs to distinguish instances (strings or truth-tables) of complexity ≤ s_1(N) from instances of complexity ≥ s_2(N), and N = 2^n denotes the input length. In MCSP, complexity is measured by circuit size, while in MKtP one considers Levin’s notion of time-bounded Kolmogorov complexity. (In our results, the parameters s_1(N) and s_2(N) are asymptotically quite close, and the problems almost coincide with their standard formulations without a gap.) We establish that for Gap-MKtP[s_1,s_2] and Gap-MCSP[s_1,s_2], a marginal improvement over the state-of-the-art in unconditional lower bounds in a variety of computational models would imply explicit super-polynomial lower bounds.
Theorem. There exists a universal constant c ≥ 1 for which the following hold. If there exists ε > 0 such that for every small enough β > 0:
(1) Gap-MCSP[2^{βn}/cn, 2^{βn}] ∉ Circuit[N^{1+ε}], then NP ⊈ Circuit[poly];
(2) Gap-MKtP[2^{βn}, 2^{βn} + cn] ∉ TC⁰[N^{1+ε}], then EXP ⊈ TC⁰[poly];
(3) Gap-MKtP[2^{βn}, 2^{βn} + cn] ∉ B_2-Formula[N^{2+ε}], then EXP ⊈ Formula[poly];
(4) Gap-MKtP[2^{βn}, 2^{βn} + cn] ∉ U_2-Formula[N^{3+ε}], then EXP ⊈ Formula[poly];
(5) Gap-MKtP[2^{βn}, 2^{βn} + cn] ∉ BP[N^{2+ε}], then EXP ⊈ BP[poly];
(6) Gap-MKtP[2^{βn}, 2^{βn} + cn] ∉ (AC⁰[6])[N^{1+ε}], then EXP ⊈ AC⁰[6].
These results are complemented by lower bounds for Gap-MCSP and Gap-MKtP against different models. For instance, the lower bound assumed in (1) holds for U_2-formulas of near-quadratic size, and lower bounds similar to (3)-(5) hold for various regimes of parameters. We also identify a natural computational model under which the hardness magnification threshold for Gap-MKtP lies below existing lower bounds: U_2-formulas that can compute parity functions at the leaves (instead of just literals). As a consequence, if one managed to adapt the existing lower bound techniques against such formulas to work with Gap-MKtP, then EXP ⊈ NC¹ would follow via hardness magnification.

Cite as

Igor Carboni Oliveira, Ján Pich, and Rahul Santhanam. Hardness Magnification near State-Of-The-Art Lower Bounds. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 27:1-27:29, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{oliveira_et_al:LIPIcs.CCC.2019.27,
  author =	{Oliveira, Igor Carboni and Pich, J\'{a}n and Santhanam, Rahul},
  title =	{{Hardness Magnification near State-Of-The-Art Lower Bounds}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{27:1--27:29},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.27},
  URN =		{urn:nbn:de:0030-drops-108494},
  doi =		{10.4230/LIPIcs.CCC.2019.27},
  annote =	{Keywords: Circuit Complexity, Minimum Circuit Size Problem, Kolmogorov Complexity}
}
Document
Expander-Based Cryptography Meets Natural Proofs

Authors: Igor Carboni Oliveira, Rahul Santhanam, and Roei Tell

Published in: LIPIcs, Volume 124, 10th Innovations in Theoretical Computer Science Conference (ITCS 2019)


Abstract
We introduce new forms of attack on expander-based cryptography, and in particular on Goldreich's pseudorandom generator and one-way function. Our attacks exploit low circuit complexity of the underlying expander's neighbor function and/or of the local predicate. Our two key conceptual contributions are:
1) We put forward the possibility that the choice of expander matters in expander-based cryptography. In particular, using expanders whose neighbor function has low circuit complexity might compromise the security of Goldreich's PRG and OWF in certain settings.
2) We show that the security of Goldreich's PRG and OWF is closely related to two other long-standing problems: specifically, to the existence of unbalanced lossless expanders with low-complexity neighbor function, and to limitations on circuit lower bounds (i.e., natural proofs). In particular, our results further motivate the investigation of affine/local unbalanced lossless expanders and of average-case lower bounds against DNF-XOR circuits.
We prove two types of technical results that support the above conceptual messages. First, we unconditionally break Goldreich's PRG when instantiated with a specific expander (whose existence we prove), for a class of predicates that match the parameters of the currently best "hard" candidates, in the regime of quasi-polynomial stretch. Second, conditioned on the existence of expanders whose neighbor functions have extremely low circuit complexity, we present attacks on Goldreich's generator in the regime of polynomial stretch. As one corollary, conditioned on the existence of the foregoing expanders, we show that either the parameters of natural properties for several constant-depth circuit classes cannot be improved, even mildly; or Goldreich's generator is insecure in the regime of a large polynomial stretch, regardless of the predicate used.

Cite as

Igor Carboni Oliveira, Rahul Santhanam, and Roei Tell. Expander-Based Cryptography Meets Natural Proofs. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 18:1-18:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{carbonioliveira_et_al:LIPIcs.ITCS.2019.18,
  author =	{Carboni Oliveira, Igor and Santhanam, Rahul and Tell, Roei},
  title =	{{Expander-Based Cryptography Meets Natural Proofs}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{18:1--18:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.18},
  URN =		{urn:nbn:de:0030-drops-101112},
  doi =		{10.4230/LIPIcs.ITCS.2019.18},
  annote =	{Keywords: Pseudorandom Generators, One-Way Functions, Expanders, Circuit Complexity}
}
Document
Deterministically Counting Satisfying Assignments for Constant-Depth Circuits with Parity Gates, with Implications for Lower Bounds

Authors: Ninad Rajgopal, Rahul Santhanam, and Srikanth Srinivasan

Published in: LIPIcs, Volume 117, 43rd International Symposium on Mathematical Foundations of Computer Science (MFCS 2018)


Abstract
We give a deterministic algorithm for counting the number of satisfying assignments of any AC⁰[⊕] circuit C of size s and depth d over n variables in time 2^{n-f(n,s,d)}, where f(n,s,d) = n/O(log(s))^{d-1}, whenever s = 2^{o(n^{1/d})}. As a consequence, we get that for each d, there is a language in E^NP that does not have AC⁰[⊕] circuits of size 2^{o(n^{1/(d+1)})}. This is the first lower bound in E^NP against AC⁰[⊕] circuits that beats the lower bound of 2^{Ω(n^{1/2(d-1)})} due to Razborov and Smolensky for large d. Both our algorithm and our lower bounds extend to AC⁰[p] circuits for any prime p.
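
As an illustrative instantiation (our arithmetic, not a claim from the paper): for circuits of polynomial size $s = n^{O(1)}$ we have $\log s = O(\log n)$, so the savings term and running time become

\[
f(n,s,d) = \frac{n}{O(\log s)^{d-1}} = \frac{n}{O(\log n)^{d-1}},
\qquad
\text{time} = 2^{\,n - n/O(\log n)^{d-1}},
\]

a $2^{n/\mathrm{polylog}(n)}$-factor speedup over brute-force evaluation of $C$ on all $2^n$ inputs.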

Cite as

Ninad Rajgopal, Rahul Santhanam, and Srikanth Srinivasan. Deterministically Counting Satisfying Assignments for Constant-Depth Circuits with Parity Gates, with Implications for Lower Bounds. In 43rd International Symposium on Mathematical Foundations of Computer Science (MFCS 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 117, pp. 78:1-78:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{rajgopal_et_al:LIPIcs.MFCS.2018.78,
  author =	{Rajgopal, Ninad and Santhanam, Rahul and Srinivasan, Srikanth},
  title =	{{Deterministically Counting Satisfying Assignments for Constant-Depth Circuits with Parity Gates, with Implications for Lower Bounds}},
  booktitle =	{43rd International Symposium on Mathematical Foundations of Computer Science (MFCS 2018)},
  pages =	{78:1--78:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-086-6},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{117},
  editor =	{Potapov, Igor and Spirakis, Paul and Worrell, James},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2018.78},
  URN =		{urn:nbn:de:0030-drops-96607},
  doi =		{10.4230/LIPIcs.MFCS.2018.78},
  annote =	{Keywords: circuit satisfiability, circuit lower bounds, polynomial method, derandomization}
}
Document
Pseudo-Derandomizing Learning and Approximation

Authors: Igor Carboni Oliveira and Rahul Santhanam

Published in: LIPIcs, Volume 116, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018)


Abstract
We continue the study of pseudo-deterministic algorithms initiated by Gat and Goldwasser [Eran Gat and Shafi Goldwasser, 2011]. A pseudo-deterministic algorithm is a probabilistic algorithm which produces a fixed output with high probability. We explore pseudo-determinism in the settings of learning and approximation. Our goal is to simulate known randomized algorithms in these settings by pseudo-deterministic algorithms in a generic fashion, a goal we succinctly term pseudo-derandomization.
Learning. In the setting of learning with membership queries, we first show that randomized learning algorithms can be derandomized (resp. pseudo-derandomized) under the standard hardness assumption that E (resp. BPE) requires large Boolean circuits. Thus, despite the fact that learning is an algorithmic task that requires interaction with an oracle, standard hardness assumptions suffice to (pseudo-)derandomize it. We also unconditionally pseudo-derandomize any quasi-polynomial time learning algorithm for polynomial-size circuits on infinitely many input lengths in sub-exponential time. Next, we establish a generic connection between learning and derandomization in the reverse direction, by showing that deterministic (resp. pseudo-deterministic) learning algorithms for a concept class C imply hitting sets against C that are computable deterministically (resp. pseudo-deterministically). In particular, this suggests a new approach to constructing hitting set generators against AC⁰[p] circuits by giving a deterministic learning algorithm for AC⁰[p].
Approximation. Turning to approximation, we unconditionally pseudo-derandomize any poly-time randomized approximation scheme for integer-valued functions infinitely often in subexponential time over any samplable distribution on inputs. As a corollary, we get that the (0,1)-Permanent has a fully pseudo-deterministic approximation scheme running in sub-exponential time infinitely often over any samplable distribution on inputs. Finally, we investigate the notion of approximate canonization of Boolean circuits. We use a connection between pseudo-deterministic learning and approximate canonization to show that if BPE does not have sub-exponential size circuits infinitely often, then there is a pseudo-deterministic approximate canonizer for AC⁰[p] computable in quasi-polynomial time.

Cite as

Igor Carboni Oliveira and Rahul Santhanam. Pseudo-Derandomizing Learning and Approximation. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 116, pp. 55:1-55:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{carbonioliveira_et_al:LIPIcs.APPROX-RANDOM.2018.55,
  author =	{Carboni Oliveira, Igor and Santhanam, Rahul},
  title =	{{Pseudo-Derandomizing Learning and Approximation}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018)},
  pages =	{55:1--55:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-085-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{116},
  editor =	{Blais, Eric and Jansen, Klaus and Rolim, Jos\'{e} D. P. and Steurer, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2018.55},
  URN =		{urn:nbn:de:0030-drops-94598},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2018.55},
  annote =	{Keywords: derandomization, learning, approximation, boolean circuits}
}
Document
Proof Complexity (Dagstuhl Seminar 18051)

Authors: Albert Atserias, Jakob Nordström, Pavel Pudlák, and Rahul Santhanam

Published in: Dagstuhl Reports, Volume 8, Issue 1 (2018)


Abstract
The study of proof complexity was initiated in [Cook and Reckhow 1979] as a way to attack the P vs. NP problem, and in the ensuing decades many powerful techniques have been discovered for analyzing different proof systems. Proof complexity also gives a way of studying subsystems of Peano Arithmetic where the power of mathematical reasoning is restricted, and of quantifying how complex different mathematical theorems are, as measured in terms of the strength of the methods of reasoning required to establish their validity. Moreover, it makes it possible to analyse the power and limitations of satisfiability algorithms (SAT solvers) used in industrial applications with formulas containing up to millions of variables. During the last 10--15 years the area of proof complexity has seen a revival with many exciting results, and new connections have also been revealed with other areas such as, e.g., cryptography, algebraic complexity theory, communication complexity, and combinatorial optimization. While many longstanding open problems from the 1980s and 1990s still remain unsolved, recent progress gives hope that the area may be ripe for decisive breakthroughs. This workshop, gathering researchers from different strands of the proof complexity community, gave opportunities to take stock of where we stand and discuss the way ahead.
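
As a point of reference for the Cook--Reckhow framework cited above, here is the textbook definition in LaTeX; this formulation is standard background and our addition, not text from the report.

% Cook-Reckhow (1979): a propositional proof system is a polynomial-time
% computable surjection onto the set of propositional tautologies.
\[
  P : \{0,1\}^{*} \twoheadrightarrow \mathrm{TAUT}, \qquad P \in \mathrm{FP}.
\]
% A string w with P(w) = phi is a P-proof of phi. P is polynomially bounded
% if every tautology has a P-proof of size polynomial in the tautology's
% length; such a system exists if and only if NP = coNP.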

Cite as

Albert Atserias, Jakob Nordström, Pavel Pudlák, and Rahul Santhanam. Proof Complexity (Dagstuhl Seminar 18051). In Dagstuhl Reports, Volume 8, Issue 1, pp. 124-157, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@Article{atserias_et_al:DagRep.8.1.124,
  author =	{Atserias, Albert and Nordstr\"{o}m, Jakob and Pudl\'{a}k, Pavel and Santhanam, Rahul},
  title =	{{Proof Complexity (Dagstuhl Seminar 18051)}},
  pages =	{124--157},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2018},
  volume =	{8},
  number =	{1},
  editor =	{Atserias, Albert and Nordstr\"{o}m, Jakob and Pudl\'{a}k, Pavel and Santhanam, Rahul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagRep.8.1.124},
  URN =		{urn:nbn:de:0030-drops-92864},
  doi =		{10.4230/DagRep.8.1.124},
  annote =	{Keywords: bounded arithmetic, computational complexity, logic, proof complexity, satisfiability algorithms}
}
Document
NP-hardness of Minimum Circuit Size Problem for OR-AND-MOD Circuits

Authors: Shuichi Hirahara, Igor C. Oliveira, and Rahul Santhanam

Published in: LIPIcs, Volume 102, 33rd Computational Complexity Conference (CCC 2018)


Abstract
The Minimum Circuit Size Problem (MCSP) asks for the size of the smallest Boolean circuit that computes a given truth table. It is a prominent problem in NP that is believed to be hard, but for which no proof of NP-hardness has been found. A significant number of works have demonstrated the central role of this problem and its variations in diverse areas such as cryptography, derandomization, proof complexity, learning theory, and circuit lower bounds. The NP-hardness of computing the minimum number of terms in a DNF formula consistent with a given truth table was proved by W. Masek [William J. Masek, 1979] in 1979. In this work, we make the first progress in showing NP-hardness for more expressive classes of circuits, and establish an analogous result for the MCSP problem for depth-3 circuits of the form OR-AND-MOD_2. Our techniques extend to an NP-hardness result for MOD_m gates at the bottom layer under inputs from (Z / m Z)^n.
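
For concreteness, the decision version of MCSP described in the first sentence can be written as a language; the rendering below is ours.

% MCSP: given a truth table T and a size bound s, decide whether some
% small circuit computes the function T describes.
\[
  \mathrm{MCSP} \;=\; \bigl\{\, (T, s) \;:\; T \in \{0,1\}^{2^{n}} \text{ is the truth table of an } f \text{ with circuit size at most } s \,\bigr\}.
\]
% MCSP is in NP: a circuit of size at most s can be guessed and checked
% against all 2^n rows of T in time polynomial in the input length. The
% variant studied here restricts the circuits to depth-3 OR-AND-MOD_2 form.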

Cite as

Shuichi Hirahara, Igor C. Oliveira, and Rahul Santhanam. NP-hardness of Minimum Circuit Size Problem for OR-AND-MOD Circuits. In 33rd Computational Complexity Conference (CCC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 102, pp. 5:1-5:31, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{hirahara_et_al:LIPIcs.CCC.2018.5,
  author =	{Hirahara, Shuichi and Oliveira, Igor C. and Santhanam, Rahul},
  title =	{{NP-hardness of Minimum Circuit Size Problem for OR-AND-MOD Circuits}},
  booktitle =	{33rd Computational Complexity Conference (CCC 2018)},
  pages =	{5:1--5:31},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-069-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{102},
  editor =	{Servedio, Rocco A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2018.5},
  URN =		{urn:nbn:de:0030-drops-88831},
  doi =		{10.4230/LIPIcs.CCC.2018.5},
  annote =	{Keywords: NP-hardness, Minimum Circuit Size Problem, depth-3 circuits}
}
Document
On the Average-Case Complexity of MCSP and Its Variants

Authors: Shuichi Hirahara and Rahul Santhanam

Published in: LIPIcs, Volume 79, 32nd Computational Complexity Conference (CCC 2017)


Abstract
We prove various results on the complexity of MCSP (Minimum Circuit Size Problem) and the related MKTP (Minimum Kolmogorov Time-Bounded Complexity Problem): * We observe that under standard cryptographic assumptions, MCSP has a pseudorandom self-reduction. This is a new notion we define by relaxing the notion of a random self-reduction to allow queries to be pseudorandom rather than uniformly random. As a consequence we derive a weak form of a worst-case to average-case reduction for (a promise version of) MCSP. Our result also distinguishes MCSP from natural NP-complete problems, which are not known to have worst-case to average-case reductions. Indeed, it is known that strong forms of worst-case to average-case reductions for NP-complete problems collapse the Polynomial Hierarchy. * We prove the first non-trivial formula size lower bounds for MCSP by showing that MCSP requires nearly quadratic-size De Morgan formulas. * We show average-case superpolynomial size lower bounds for MKTP against AC^0[p] for any prime p. * We show the hardness of MKTP on average under assumptions that have been used in much recent work, such as Feige's assumptions, Alekhnovich's assumption and the Planted Clique conjecture. In addition, MCSP is hard under Alekhnovich's assumption. Using a version of Feige's assumption against co-nondeterministic algorithms that has been conjectured recently, we provide evidence for the first time that MKTP is not in coNP. Our results suggest that it might be worthwhile to focus on the average-case hardness of MKTP and MCSP when approaching the question of whether these problems are NP-hard.
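
The pseudorandom self-reduction introduced above can be sketched in symbols; the notation and parameters here are our gloss, not the paper's.

% A random self-reduction for L is a poly-time oracle machine R with
%   Pr_r[ R^L(x; r) = L(x) ] >= 2/3,
% each oracle query being uniformly distributed over the randomness r.
% The relaxation defined in this paper only asks that each query be
% pseudorandom, i.e., indistinguishable from uniform by polynomial-size
% circuits. An oracle that is merely correct on average then still answers
% such queries correctly with noticeable probability, which is what yields
% the weak worst-case to average-case reduction for (a promise version of)
% MCSP.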

Cite as

Shuichi Hirahara and Rahul Santhanam. On the Average-Case Complexity of MCSP and Its Variants. In 32nd Computational Complexity Conference (CCC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 79, pp. 7:1-7:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{hirahara_et_al:LIPIcs.CCC.2017.7,
  author =	{Hirahara, Shuichi and Santhanam, Rahul},
  title =	{{On the Average-Case Complexity of MCSP and Its Variants}},
  booktitle =	{32nd Computational Complexity Conference (CCC 2017)},
  pages =	{7:1--7:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-040-8},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{79},
  editor =	{O'Donnell, Ryan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2017.7},
  URN =		{urn:nbn:de:0030-drops-75406},
  doi =		{10.4230/LIPIcs.CCC.2017.7},
  annote =	{Keywords: minimum circuit size problem, average-case complexity, circuit lower bounds, time-bounded Kolmogorov complexity, hardness}
}
Document
Conspiracies Between Learning Algorithms, Circuit Lower Bounds, and Pseudorandomness

Authors: Igor C. Carboni Oliveira and Rahul Santhanam

Published in: LIPIcs, Volume 79, 32nd Computational Complexity Conference (CCC 2017)


Abstract
We prove several results giving new and stronger connections between learning theory, circuit complexity and pseudorandomness. Let C be any typical class of Boolean circuits, and C[s(n)] denote n-variable C-circuits of size <= s(n). We show: Learning Speedups: If C[s(n)] admits a randomized weak learning algorithm under the uniform distribution with membership queries that runs in time 2^n/n^{omega(1)}, then for every k >= 1 and epsilon > 0 the class C[n^k] can be learned to high accuracy in time O(2^{n^epsilon}). There is epsilon > 0 such that C[2^{n^{epsilon}}] can be learned in time 2^n/n^{omega(1)} if and only if C[poly(n)] can be learned in time 2^{(log(n))^{O(1)}}. Equivalences between Learning Models: We use learning speedups to obtain equivalences between various randomized learning and compression models, including sub-exponential time learning with membership queries, sub-exponential time learning with membership and equivalence queries, probabilistic function compression and probabilistic average-case function compression. A Dichotomy between Learnability and Pseudorandomness: In the non-uniform setting, there is non-trivial learning for C[poly(n)] if and only if there are no exponentially secure pseudorandom functions computable in C[poly(n)]. Lower Bounds from Nontrivial Learning: If for each k >= 1, (depth-d)-C[n^k] admits a randomized weak learning algorithm with membership queries under the uniform distribution that runs in time 2^n/n^{omega(1)}, then for each k >= 1, BPE is not contained in (depth-d)-C[n^k]. If for some epsilon > 0 there are P-natural proofs useful against C[2^{n^{epsilon}}], then ZPEXP is not contained in C[poly(n)]. Karp-Lipton Theorems for Probabilistic Classes: If there is a k > 0 such that BPE is contained in i.o.Circuit[n^k], then BPEXP is contained in i.o.EXP/O(log(n)). If ZPEXP is contained in i.o.Circuit[2^{n/3}], then ZPEXP is contained in i.o.ESUBEXP. Hardness Results for MCSP: All functions in non-uniform NC^1 reduce to the Minimum Circuit Size Problem via truth-table reductions computable by TC^0 circuits. In particular, if MCSP is in TC^0 then NC^1 = TC^0.
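
The first result above (the learning speedup) is compact enough to display; the rendering is ours, with "learnable" abbreviating randomized weak learning under the uniform distribution with membership queries, as in the abstract.

\[
  C[s(n)] \text{ learnable in time } 2^{n}/n^{\omega(1)} \;\Longrightarrow\; \forall k \ge 1,\ \varepsilon > 0 :\; C[n^{k}] \text{ learnable to high accuracy in time } O\bigl(2^{n^{\varepsilon}}\bigr).
\]
% Read together with the second item: non-trivial (2^n/n^{omega(1)})-time
% learning of subexponential-size circuits is equivalent to
% quasi-polynomial-time learning of polynomial-size circuits.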

Cite as

Igor C. Carboni Oliveira and Rahul Santhanam. Conspiracies Between Learning Algorithms, Circuit Lower Bounds, and Pseudorandomness. In 32nd Computational Complexity Conference (CCC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 79, pp. 18:1-18:49, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{oliveira_et_al:LIPIcs.CCC.2017.18,
  author =	{Oliveira, Igor C. Carboni and Santhanam, Rahul},
  title =	{{Conspiracies Between Learning Algorithms, Circuit Lower Bounds, and Pseudorandomness}},
  booktitle =	{32nd Computational Complexity Conference (CCC 2017)},
  pages =	{18:1--18:49},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-040-8},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{79},
  editor =	{O'Donnell, Ryan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2017.18},
  URN =		{urn:nbn:de:0030-drops-75327},
  doi =		{10.4230/LIPIcs.CCC.2017.18},
  annote =	{Keywords: boolean circuits, learning theory, pseudorandomness}
}
Document
Exponential Time Paradigms Through the Polynomial Time Lens

Authors: Andrew Drucker, Jesper Nederlof, and Rahul Santhanam

Published in: LIPIcs, Volume 57, 24th Annual European Symposium on Algorithms (ESA 2016)


Abstract
We propose a general approach to modelling algorithmic paradigms for the exact solution of NP-hard problems. Our approach is based on polynomial time reductions to succinct versions of problems solvable in polynomial time. We use this viewpoint to explore and compare the power of paradigms such as branching and dynamic programming, and to shed light on the true complexity of various problems. As one instantiation, we model branching using the notion of witness compression, i.e., reducibility to the circuit satisfiability problem parameterized by the number of variables of the circuit. We show this is equivalent to the previously studied notion of `OPP-algorithms', and provide a technique for proving conditional lower bounds for witness compression via a constructive variant of AND-composition, a notion previously studied in the theory of preprocessing. In the context of parameterized complexity we use this to show that problems such as Pathwidth and Treewidth and Independent Set parameterized by pathwidth do not have witness compression, assuming NP is not contained in coNP/poly. Since these problems admit fast fixed parameter tractable algorithms via dynamic programming, this shows that dynamic programming can be stronger than branching, under a standard complexity hypothesis. Our approach has applications outside parameterized complexity as well: for example, we show that if a polynomial time algorithm outputs a maximum independent set of a given planar graph on n vertices with probability exp(-n^{1-epsilon}) for some epsilon>0, then NP is contained in coNP/poly. This negative result dims the prospects for one very natural approach to sub-exponential time algorithms for problems on planar graphs. As two further, more exploratory illustrations of our approach, we model algorithms based on inclusion-exclusion or group algebras via the notion of "parity compression", and we model a subclass of dynamic programming algorithms with the notion of "disjunctive dynamic programming". These models give us a way to naturally classify various parameterized problems with FPT algorithms. In the case of the dynamic programming model, we show that Independent Set parameterized by pathwidth is complete for this model.
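
A sketch of the central modelling device, in our notation; the abstract names the notion but does not display it.

% Witness compression for a language L: a polynomial-time reduction to
% circuit satisfiability that shrinks the number of circuit variables.
\[
  x \;\mapsto\; C_x \ \text{(polynomial time)}, \qquad x \in L \iff C_x \text{ is satisfiable}, \qquad \#\mathrm{vars}(C_x) \;\le\; s(|x|).
\]
% The parameter s measures the strength of the compression; under the
% equivalence with `OPP-algorithms' noted above, s(n) variables correspond
% to a poly-time randomized algorithm succeeding with probability about
% 2^{-s(n)}.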

Cite as

Andrew Drucker, Jesper Nederlof, and Rahul Santhanam. Exponential Time Paradigms Through the Polynomial Time Lens. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 36:1-36:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{drucker_et_al:LIPIcs.ESA.2016.36,
  author =	{Drucker, Andrew and Nederlof, Jesper and Santhanam, Rahul},
  title =	{{Exponential Time Paradigms Through the Polynomial Time Lens}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{36:1--36:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.36},
  URN =		{urn:nbn:de:0030-drops-63871},
  doi =		{10.4230/LIPIcs.ESA.2016.36},
  annote =	{Keywords: exponential time paradigms, branching, dynamic programming, lower bounds}
}
Document
Average-Case Lower Bounds and Satisfiability Algorithms for Small Threshold Circuits

Authors: Ruiwen Chen, Rahul Santhanam, and Srikanth Srinivasan

Published in: LIPIcs, Volume 50, 31st Conference on Computational Complexity (CCC 2016)


Abstract
We show average-case lower bounds for explicit Boolean functions against bounded-depth threshold circuits with a superlinear number of wires. We show that for each integer d > 1, there is epsilon_d > 0 such that Parity has correlation at most 1/n^{Omega(1)} with depth-d threshold circuits which have at most n^{1+epsilon_d} wires, and the Generalized Andreev Function has correlation at most 1/2^{n^{Omega(1)}} with depth-d threshold circuits which have at most n^{1+epsilon_d} wires. Previously, only worst-case lower bounds in this setting were known [Impagliazzo/Paturi/Saks, SIAM J. Comp., 1997]. We use our ideas to make progress on several related questions. We give satisfiability algorithms beating brute force search for depth-d threshold circuits with a superlinear number of wires. These are the first such algorithms for depth greater than 2. We also show that Parity cannot be computed by polynomial-size AC^0 circuits with n^{o(1)} general threshold gates. Previously no lower bound for Parity in this setting could handle more than log(n) gates. This result also implies subexponential-time learning algorithms for AC^0 with n^{o(1)} threshold gates under the uniform distribution. In addition, we give almost optimal bounds for the number of gates in a depth-d threshold circuit computing Parity on average, and show average-case lower bounds for threshold formulas of any depth. Our techniques include adaptive random restrictions, anti-concentration and the structural theory of linear threshold functions, and bounded-read Chernoff bounds.
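
"Correlation" above is the standard average-case agreement measure; spelled out (our rendering):

\[
  \mathrm{Corr}(f, C) \;=\; \Bigl|\, \Pr_{x \sim \{0,1\}^{n}}[f(x) = C(x)] - \Pr_{x \sim \{0,1\}^{n}}[f(x) \ne C(x)] \,\Bigr| \;=\; \bigl|\, \mathbb{E}_{x}\, (-1)^{f(x) + C(x)} \,\bigr|,
\]
% so the first bound says that every depth-d threshold circuit with at most
% n^{1+epsilon_d} wires agrees with Parity on at most a
% 1/2 + 1/n^{Omega(1)} fraction of inputs.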

Cite as

Ruiwen Chen, Rahul Santhanam, and Srikanth Srinivasan. Average-Case Lower Bounds and Satisfiability Algorithms for Small Threshold Circuits. In 31st Conference on Computational Complexity (CCC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 50, pp. 1:1-1:35, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{chen_et_al:LIPIcs.CCC.2016.1,
  author =	{Chen, Ruiwen and Santhanam, Rahul and Srinivasan, Srikanth},
  title =	{{Average-Case Lower Bounds and Satisfiability Algorithms for Small Threshold Circuits}},
  booktitle =	{31st Conference on Computational Complexity (CCC 2016)},
  pages =	{1:1--1:35},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-008-8},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{50},
  editor =	{Raz, Ran},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2016.1},
  URN =		{urn:nbn:de:0030-drops-58447},
  doi =		{10.4230/LIPIcs.CCC.2016.1},
  annote =	{Keywords: threshold circuit, satisfiability algorithm, circuit lower bound}
}
Document
New Non-Uniform Lower Bounds for Uniform Classes

Authors: Lance Fortnow and Rahul Santhanam

Published in: LIPIcs, Volume 50, 31st Conference on Computational Complexity (CCC 2016)


Abstract
We strengthen the hierarchy theorem for non-deterministic polynomial time to show that the lower bound holds against sub-linear advice. More formally, we show that for any constants d and d' such that 1 <= d < d', and for any time-constructible bound t = o(n^d), there is a language in NTIME(n^d) which is not in NTIME(t)/n^{1/d'}. The best known earlier separation of Fortnow, Santhanam and Trevisan could only handle o(log(n)) bits of advice in the lower bound, and was not tight with respect to the time bounds. We generalize our hierarchy theorem to work for other syntactic complexity measures between polynomial time and polynomial space, including alternating polynomial time with any fixed number of alternations. We also use our technique to derive an almost-everywhere hierarchy theorem for non-deterministic classes which use a sub-linear amount of non-determinism, i.e., the lower bound holds on all but finitely many input lengths rather than just on infinitely many. As one application of our main result, we derive a new lower bound for NP against NP-uniform non-deterministic circuits of size O(n^k) for any fixed k. This result is a significant strengthening of a result of Kannan, which states that not all of NP can be solved with P-uniform circuits of size O(n^k) for any fixed k. As another application, we show strong non-uniform lower bounds for the complexity class RE of languages decidable in randomized linear exponential time with one-sided error.
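
In symbols, the main separation stated above reads (rendering ours):

\[
  1 \le d < d', \quad t(n) = o(n^{d}) \ \text{time-constructible} \;\Longrightarrow\; \mathrm{NTIME}(n^{d}) \not\subseteq \mathrm{NTIME}(t)/n^{1/d'}.
\]
% Here /n^{1/d'} denotes n^{1/d'} bits of non-uniform advice per input
% length, so the hierarchy theorem tolerates sub-linear advice.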

Cite as

Lance Fortnow and Rahul Santhanam. New Non-Uniform Lower Bounds for Uniform Classes. In 31st Conference on Computational Complexity (CCC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 50, pp. 19:1-19:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{fortnow_et_al:LIPIcs.CCC.2016.19,
  author =	{Fortnow, Lance and Santhanam, Rahul},
  title =	{{New Non-Uniform Lower Bounds for Uniform Classes}},
  booktitle =	{31st Conference on Computational Complexity (CCC 2016)},
  pages =	{19:1--19:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-008-8},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{50},
  editor =	{Raz, Ran},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2016.19},
  URN =		{urn:nbn:de:0030-drops-58503},
  doi =		{10.4230/LIPIcs.CCC.2016.19},
  annote =	{Keywords: Computational complexity, nondeterminism, nonuniform complexity}
}
Document
Majority is Incompressible by AC^0[p] Circuits

Authors: Igor Carboni Oliveira and Rahul Santhanam

Published in: LIPIcs, Volume 33, 30th Conference on Computational Complexity (CCC 2015)


Abstract
We consider C-compression games, a hybrid model between computational and communication complexity. A C-compression game for a function f:{0,1}^n -> {0,1} is a two-party communication game, where the first party Alice knows the entire input x but is restricted to use strategies computed by C-circuits, while the second party Bob initially has no information about the input, but is computationally unbounded. The parties implement an interactive communication protocol to decide the value of f(x), and the communication cost of the protocol is the maximum number of bits sent by Alice as a function of n = |x|. We show that any AC_d[p]-compression protocol to compute Majority_n requires communication n / (log(n))^(2d + O(1)), where p is prime, and AC_d[p] denotes polynomial size unbounded fan-in depth-d Boolean circuits extended with modulo p gates. This bound is essentially optimal, and settles a question of Chattopadhyay and Santhanam (2012). This result has a number of consequences, and yields a tight lower bound on the total fan-in of oracle gates in constant-depth oracle circuits computing Majority_n. We define multiparty compression games, where Alice interacts in parallel with a polynomial number of players that are not allowed to communicate with each other, and communication cost is defined as the sum of the lengths of the longest messages sent by Alice during each round. In this setting, we prove that the randomized r-round AC^0[p]-compression cost of Majority_n is n^(Theta(1/r)). This result implies almost tight lower bounds on the maximum individual fan-in of oracle gates in certain restricted bounded-depth oracle circuits computing Majority_n. Stronger lower bounds for functions in NP would separate NP from NC^1. Finally, we consider the round separation question for two-party AC-compression games, and significantly improve known separations between r-round and (r+1)-round protocols, for any constant r.
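
The two main bounds can be displayed side by side; the cost notation below is ours.

\[
  \mathrm{cost}_{\mathrm{AC}_d[p]}(\mathrm{Majority}_n) \;\ge\; \frac{n}{(\log n)^{2d + O(1)}}, \qquad \mathrm{cost}^{\,r\text{-round}}_{\mathrm{AC}^0[p]}(\mathrm{Majority}_n) \;=\; n^{\Theta(1/r)},
\]
% where the left-hand bound is for two-party compression games with a
% depth-d circuit player Alice, and the right-hand bound is for the
% randomized r-round multiparty games described above.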

Cite as

Igor Carboni Oliveira and Rahul Santhanam. Majority is Incompressible by AC^0[p] Circuits. In 30th Conference on Computational Complexity (CCC 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 33, pp. 124-157, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


Copy BibTex To Clipboard

@InProceedings{oliveira_et_al:LIPIcs.CCC.2015.124,
  author =	{Oliveira, Igor Carboni and Santhanam, Rahul},
  title =	{{Majority is Incompressible by AC^0\lbrackp\rbrack Circuits}},
  booktitle =	{30th Conference on Computational Complexity (CCC 2015)},
  pages =	{124--157},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-81-1},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{33},
  editor =	{Zuckerman, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2015.124},
  URN =		{urn:nbn:de:0030-drops-50658},
  doi =		{10.4230/LIPIcs.CCC.2015.124},
  annote =	{Keywords: interactive communication, compression, circuit lower bound}
}
Document
Optimal algorithms and proofs (Dagstuhl Seminar 14421)

Authors: Olaf Beyersdorff, Edward A. Hirsch, Jan Krajicek, and Rahul Santhanam

Published in: Dagstuhl Reports, Volume 4, Issue 10 (2015)


Abstract
This report documents the programme and the outcomes of the Dagstuhl Seminar 14421 "Optimal algorithms and proofs". The seminar brought together researchers working in computational and proof complexity, logic, and the theory of approximations. Each of these areas has its own, but connected notion of optimality; and the main aim of the seminar was to bring together researchers from these different areas, for an exchange of ideas, techniques, and open questions, thereby triggering new research collaborations across established research boundaries.

Cite as

Olaf Beyersdorff, Edward A. Hirsch, Jan Krajicek, and Rahul Santhanam. Optimal algorithms and proofs (Dagstuhl Seminar 14421). In Dagstuhl Reports, Volume 4, Issue 10, pp. 51-68, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


Copy BibTex To Clipboard

@Article{beyersdorff_et_al:DagRep.4.10.51,
  author =	{Beyersdorff, Olaf and Hirsch, Edward A. and Krajicek, Jan and Santhanam, Rahul},
  title =	{{Optimal algorithms and proofs (Dagstuhl Seminar 14421)}},
  pages =	{51--68},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2015},
  volume =	{4},
  number =	{10},
  editor =	{Beyersdorff, Olaf and Hirsch, Edward A. and Krajicek, Jan and Santhanam, Rahul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagRep.4.10.51},
  URN =		{urn:nbn:de:0030-drops-48923},
  doi =		{10.4230/DagRep.4.10.51},
  annote =	{Keywords: computational complexity, proof complexity, approximation algorithms, optimal algorithms, optimal proof systems, speedup theorems}
}
Document
Stronger Lower Bounds and Randomness-Hardness Trade-Offs Using Associated Algebraic Complexity Classes

Authors: Maurice Jansen and Rahul Santhanam

Published in: LIPIcs, Volume 14, 29th International Symposium on Theoretical Aspects of Computer Science (STACS 2012)


Abstract
We associate to each Boolean language complexity class C the algebraic class a.C consisting of families of polynomials {f_n} for which the evaluation problem over the integers is in C. We prove the following lower bound and randomness-to-hardness results: 1. If polynomial identity testing (PIT) is in NSUBEXP then a.NEXP does not have poly size constant-free arithmetic circuits. 2. a.NEXP^RP does not have poly size constant-free arithmetic circuits. 3. For every fixed k, a.MA does not have arithmetic circuits of size n^k. Items 1 and 2 strengthen two results due to (Kabanets and Impagliazzo, 2004). The third item improves a lower bound due to (Santhanam, 2009). We consider the special case low-PIT of identity testing for (constant-free) arithmetic circuits with low formal degree, and give improved hardness-to-randomness trade-offs that apply to this case. Combining our results for both directions of the hardness-randomness connection, we demonstrate a case where derandomization of PIT and proving lower bounds are equivalent. Namely, we show that low-PIT is in i.o-NTIME[2^{n^{o(1)}}]/n^{o(1)} if and only if there exists a family of multilinear polynomials in a.NE/lin that requires constant-free arithmetic circuits of super-polynomial size and formal degree.
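
The association between Boolean and algebraic classes can be written out; the set notation is ours.

\[
  \mathrm{a.}C \;=\; \bigl\{\, \{f_n\}_{n \ge 1} \;:\; \text{the evaluation problem for } f_n \text{ over } \mathbb{Z} \text{ is in } C \,\bigr\},
\]
% so, e.g., a.NEXP consists of families of polynomials whose integer
% evaluation problem lies in NEXP; see the paper for the precise encoding
% of evaluation instances.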

Cite as

Maurice Jansen and Rahul Santhanam. Stronger Lower Bounds and Randomness-Hardness Trade-Offs Using Associated Algebraic Complexity Classes. In 29th International Symposium on Theoretical Aspects of Computer Science (STACS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 14, pp. 519-530, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


Copy BibTex To Clipboard

@InProceedings{jansen_et_al:LIPIcs.STACS.2012.519,
  author =	{Jansen, Maurice and Santhanam, Rahul},
  title =	{{Stronger Lower Bounds and Randomness-Hardness Trade-Offs Using Associated Algebraic Complexity Classes}},
  booktitle =	{29th International Symposium on Theoretical Aspects of Computer Science (STACS 2012)},
  pages =	{519--530},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-35-4},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{14},
  editor =	{D\"{u}rr, Christoph and Wilke, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2012.519},
  URN =		{urn:nbn:de:0030-drops-34307},
  doi =		{10.4230/LIPIcs.STACS.2012.519},
  annote =	{Keywords: Computational Complexity, Circuit Lower Bounds, Polynomial Identity Testing, Derandomization}
}
Document
Unconditional Lower Bounds against Advice

Authors: Harry Buhrman, Lance Fortnow, and Rahul Santhanam

Published in: Dagstuhl Seminar Proceedings, Volume 9421, Algebraic Methods in Computational Complexity (2010)


Abstract
We show several unconditional lower bounds for exponential time classes against polynomial time classes with advice, including: (1) For any constant c, NEXP not in P^{NP[n^c]} (2) For any constant c, MAEXP not in MA/n^c (3) BPEXP not in BPP/n^{o(1)}. It was previously unknown even whether NEXP is contained in NP/n^{0.01}. For the probabilistic classes, no lower bounds for uniform exponential time against advice were known before. We also consider the question of whether these lower bounds can be made to work on almost all input lengths rather than on infinitely many. We give an oracle relative to which NEXP is contained in i.o.NP, which provides evidence that this is not possible with current techniques.

Cite as

Harry Buhrman, Lance Fortnow, and Rahul Santhanam. Unconditional Lower Bounds against Advice. In Algebraic Methods in Computational Complexity. Dagstuhl Seminar Proceedings, Volume 9421, pp. 1-11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{buhrman_et_al:DagSemProc.09421.8,
  author =	{Buhrman, Harry and Fortnow, Lance and Santhanam, Rahul},
  title =	{{Unconditional Lower Bounds against Advice}},
  booktitle =	{Algebraic Methods in Computational Complexity},
  pages =	{1--11},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{9421},
  editor =	{Agrawal, Manindra and Fortnow, Lance and Thierauf, Thomas and Umans, Christopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.09421.8},
  URN =		{urn:nbn:de:0030-drops-24112},
  doi =		{10.4230/DagSemProc.09421.8},
  annote =	{Keywords: Advice, derandomization, diagonalization, lower bounds, semantic classes}
}
Document
Fractional Pebbling and Thrifty Branching Programs

Authors: Mark Braverman, Stephen Cook, Pierre McKenzie, Rahul Santhanam, and Dustin Wehr

Published in: LIPIcs, Volume 4, IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (2009)


Abstract
We study the branching program complexity of the tree evaluation problem, introduced in [BrCoMcSaWe09] as a candidate for separating NL from LogCFL. The input to the problem is a rooted, balanced d-ary tree of height h, whose internal nodes are labelled with d-ary functions on [k] = {1,...,k}, and whose leaves are labelled with elements of [k]. Each node obtains a value in [k] equal to its d-ary function applied to the values of its d children. The output is the value of the root. Deterministic k-way branching programs as related to black pebbling algorithms have been studied in [BrCoMcSaWe09]. Here we introduce the notion of fractional pebbling of graphs to study non-deterministic branching program size. We prove that this yields non-deterministic branching programs with Theta(k^{h/2+1}) states solving the Boolean problem "determine whether the root has value 1" for binary trees - this is asymptotically better than the branching program size corresponding to black-white pebbling. We prove upper and lower bounds on the fractional pebbling number of d-ary trees, as well as a general result relating the fractional pebbling number of a graph to the black-white pebbling number. We introduce a simple semantic restriction called thrifty on k-way branching programs solving tree evaluation problems, and show that the branching program size bound of Theta(k^h) is tight (up to a constant factor) for all h >= 2 for deterministic thrifty programs. We show that the non-deterministic branching programs that correspond to fractional pebbling are thrifty as well, and that the bound of Theta(k^{h/2+1}) is tight for non-deterministic thrifty programs for h = 2, 3, 4. We hypothesise that thrifty branching programs are optimal among k-way branching programs solving the tree evaluation problem - proving this for deterministic programs would separate L from LogCFL, and proving it for non-deterministic programs would separate NL from LogCFL.
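
To fix the semantics of the problem, here is a short, self-contained Python sketch of (deterministic) tree evaluation; the tuple encoding of the input is our own choice and is unrelated to the branching-program model studied in the paper.

# Tree evaluation for a balanced binary tree (d = 2). Internal nodes carry
# functions [k] x [k] -> [k]; leaves carry values in [k] = {1, ..., k}.
# A node is either ("leaf", v) or ("op", f, left, right).

def evaluate(node):
    """Return the value in [k] computed at `node`."""
    if node[0] == "leaf":
        return node[1]
    _, f, left, right = node
    return f(evaluate(left), evaluate(right))

# Example with k = 3: the root applies max to the values of its two leaves.
tree = ("op", max, ("leaf", 2), ("leaf", 3))
assert evaluate(tree) == 3  # the Boolean version asks whether the root is 1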

Cite as

Mark Braverman, Stephen Cook, Pierre McKenzie, Rahul Santhanam, and Dustin Wehr. Fractional Pebbling and Thrifty Branching Programs. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 4, pp. 109-120, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2009)


Copy BibTex To Clipboard

@InProceedings{braverman_et_al:LIPIcs.FSTTCS.2009.2311,
  author =	{Braverman, Mark and Cook, Stephen and McKenzie, Pierre and Santhanam, Rahul and Wehr, Dustin},
  title =	{{Fractional Pebbling and Thrifty Branching Programs}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science},
  pages =	{109--120},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-13-2},
  ISSN =	{1868-8969},
  year =	{2009},
  volume =	{4},
  editor =	{Kannan, Ravi and Narayan Kumar, K.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2009.2311},
  URN =		{urn:nbn:de:0030-drops-23111},
  doi =		{10.4230/LIPIcs.FSTTCS.2009.2311},
  annote =	{Keywords: Branching programs, space complexity, tree evaluation, pebbling}
}